Anthropic’s Claude AI Exposes Long-Overlooked Code Vulnerabilities, Shaking Cybersecurity Sector
Wall Street's confidence in traditional cybersecurity stocks is eroding as Anthropic's Claude AI demonstrates unprecedented code-review capabilities. The San Francisco-based AI firm has deployed Claude Code Security—a tool that analyzes codebases with human-like reasoning—to enterprise clients, with expedited access for open-source maintainers.
Unlike conventional scanners limited to pattern recognition, Claude traces data flows across entire systems, uncovering vulnerabilities that evade standard detection. The system employs multi-stage verification and severity ranking, prioritizing critical issues for developer attention. Running on the new Claude Opus 4.6 model, the AI has already identified over 500 latent vulnerabilities in live codebases, some persisting for decades despite expert reviews.
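The distinction between pattern matching and data-flow tracing can be made concrete with a toy sketch. The following is purely illustrative and not Anthropic's implementation: a grep-style scanner only flags a tainted value passed directly into a sink on the same line, while even a minimal taint tracker follows the value through an intermediate variable and catches the injection. All names here (`pattern_scan`, `taint_scan`, the sample snippet) are invented for the example.

```python
# Illustrative toy model: why data-flow (taint) analysis catches bugs
# that line-by-line pattern matching misses. Not Anthropic's code.
import re

# Hypothetical vulnerable snippet: user input flows into a SQL query
# via an intermediate variable, so no single line looks suspicious.
SOURCE_CODE = '''
user = request.args["name"]
greeting = "SELECT * FROM users WHERE name = '" + user + "'"
db.execute(greeting)
'''

def pattern_scan(code):
    """Grep-style scanner: flags only a request value passed
    directly into execute() on the same line."""
    return [ln for ln in code.splitlines()
            if re.search(r'execute\(.*request\.', ln)]

def taint_scan(code):
    """Tiny taint tracker: variables assigned from request.* are
    tainted, taint propagates through assignments, and execute()
    on a tainted variable is flagged."""
    tainted, findings = set(), []
    for ln in (l.strip() for l in code.splitlines() if l.strip()):
        m = re.match(r'(\w+)\s*=\s*(.+)', ln)
        if m:
            lhs, rhs = m.groups()
            # Taint if the RHS reads the request or any tainted name.
            if 'request' in rhs or set(re.findall(r'\b\w+\b', rhs)) & tainted:
                tainted.add(lhs)
        call = re.search(r'execute\((\w+)\)', ln)
        if call and call.group(1) in tainted:
            findings.append(ln)
    return findings

print(pattern_scan(SOURCE_CODE))  # [] -- the direct pattern never appears
print(taint_scan(SOURCE_CODE))    # ['db.execute(greeting)']
```

Real tools extend this idea across function and file boundaries, which is where the article's claim about tracing "data flows across entire systems" becomes the hard part: taint must survive calls, returns, and serialization, not just straight-line assignments.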
Anthropic's Frontier Red Team continues stress-testing the model while notifying affected projects. Internal deployments show promising results, suggesting AI may soon audit a significant portion of global code. This technological leap threatens to disrupt legacy security providers as institutional investors reassess market positions.